s_1ha_res <- qresid(s_1ha)
par(mfrow = c(2,2))
# density plot
plot(density(s_1ha_res))
#QQ plot
qqnorm(s_1ha_res); qqline(s_1ha_res)
#Fitted vs. residuals--binned residuals plot
binnedplot(fitted(s_1ha), residuals(s_1ha, type = "response"))
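For reference, `qresid()` computes randomized quantile (Dunn-Smyth) residuals, which are approximately standard normal when a discrete-response model fits well. A minimal sketch of the idea for a binary response (a hypothetical helper, not necessarily how the `qresid()` above is implemented):

```r
# Illustrative Dunn-Smyth randomized quantile residuals for a binary
# response. The Bernoulli CDF jumps at the observed value, so we draw a
# uniform within the jump, then map to a standard-normal quantile.
qresid_binary <- function(y, p) {
  lower <- ifelse(y == 1, 1 - p, 0) # CDF just below the observed response
  upper <- ifelse(y == 1, 1, 1 - p) # CDF at the observed response
  u <- runif(length(y), min = lower, max = upper)
  qnorm(u) # ~ N(0, 1) if the model is well calibrated
}
# e.g. (assuming `surv` is the response column):
# qresid_binary(model.frame(s_1ha)$surv, fitted(s_1ha))
```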
pdf(NULL)
gam.check(s_1ha)
##
## Method: fREML Optimizer: perf newton
## full convergence after 13 iterations.
## Gradient range [-5.769935e-05,5.853239e-06]
## (score 12515.55 & scale 1).
## Hessian positive definite, eigenvalue range [1.466609e-05,1.038272].
## Model rank = 85 / 85
##
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
##
## k' edf k-index p-value
## s(log_size_prev) 9.00e+00 2.86e+00 0.93 0.34
## s(plot) 4.00e+00 1.27e-04 NA NA
## s(spei_history,L) 7.00e+01 4.97e+00 NA NA
dev.off()
## quartz_off_screen
## 2
Basis dimension checking with gam.check() doesn’t appear to work for dlnm crossbasis smooths. Instead I use the method described in the help file ?choose.k to check that the number of knots is adequate: refit the model residuals with a fresh smooth and look for near-zero edf. Unfortunately, bs = "cs" doesn’t work with dlnm crossbasis smooths, so I use select = TRUE instead to reduce the chance of overfitting.
# looking for near zero edf
gam(s_1ha_res ~ s(log_size_prev, k = 10, bs = "cs"), gamma = 1.4,
    data = model.frame(s_1ha)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## s_1ha_res ~ s(log_size_prev, k = 10, bs = "cs")
##
## Estimated degrees of freedom:
## 0.153 total = 1.15
##
## GCV score: 0.9861161
gam(s_1ha_res ~ s(spei_history, L,
                  bs = "cb",
                  k = c(3, 35),
                  xt = list(bs = "cr")),
    gamma = 1.4,
    select = TRUE,
    data = model.frame(s_1ha)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## s_1ha_res ~ s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Estimated degrees of freedom:
## 0 total = 1
##
## GCV score: 0.9861201
summary(s_1ha)
##
## Family: binomial
## Link function: logit
##
## Formula:
## surv ~ flwr_prev + s(log_size_prev, bs = "cr") + s(plot, bs = "re") +
## s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Parametric coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 3.17262 0.07856 40.386 <2e-16 ***
## flwr_prev1 -0.51788 0.44094 -1.174 0.24
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df Chi.sq p-value
## s(log_size_prev) 2.8566448 9 359.56 <2e-16 ***
## s(plot) 0.0001271 3 0.00 0.507
## s(spei_history,L) 4.9668620 27 57.58 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.0599 Deviance explained = 13.2%
## fREML = 12516 Scale est. = 1 n = 9183
draw(s_1ha)
## Warning: Removed 910 rows containing non-finite values (stat_contour).
s_cf_res <- qresid(s_cf)
par(mfrow = c(2,2))
# density plot
plot(density(s_cf_res))
#QQ plot
qqnorm(s_cf_res); qqline(s_cf_res)
#Fitted vs. residuals--binned residuals plot
binnedplot(fitted(s_cf), residuals(s_cf, type = "response"))
### Basis dimensions
pdf(NULL)
gam.check(s_cf)
##
## Method: fREML Optimizer: perf newton
## full convergence after 12 iterations.
## Gradient range [-8.212922e-09,1.675182e-06]
## (score 43592.6 & scale 1).
## Hessian positive definite, eigenvalue range [0.3172497,1.876293].
## Model rank = 87 / 87
##
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
##
## k' edf k-index p-value
## s(log_size_prev) 9.00 3.46 0.96 0.84
## s(plot) 6.00 4.40 NA NA
## s(spei_history,L) 70.00 11.10 NA NA
dev.off()
## quartz_off_screen
## 2
# looking for near zero edf
gam(s_cf_res ~ s(log_size_prev, k = 10, bs = "cs"), gamma = 1.4,
    data = model.frame(s_cf)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## s_cf_res ~ s(log_size_prev, k = 10, bs = "cs")
##
## Estimated degrees of freedom:
## 0.421 total = 1.42
##
## GCV score: 1.003405
gam(s_cf_res ~ s(spei_history, L,
                 bs = "cb",
                 k = c(3, 35),
                 xt = list(bs = "cr")),
    gamma = 1.4,
    select = TRUE,
    data = model.frame(s_cf))
##
## Family: gaussian
## Link function: identity
##
## Formula:
## s_cf_res ~ s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Estimated degrees of freedom:
## 13.3 total = 14.26
##
## GCV score: 1.003749
summary(s_cf)
##
## Family: binomial
## Link function: logit
##
## Formula:
## surv ~ flwr_prev + s(log_size_prev, bs = "cr") + s(plot, bs = "re") +
## s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Parametric coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 3.41882 0.14056 24.323 <2e-16 ***
## flwr_prev1 0.09828 0.26873 0.366 0.715
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df Chi.sq p-value
## s(log_size_prev) 3.455 9 1976.65 <2e-16 ***
## s(plot) 4.404 5 42.83 <2e-16 ***
## s(spei_history,L) 11.101 20 180.47 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.113 Deviance explained = 20%
## fREML = 43593 Scale est. = 1 n = 31701
draw(s_cf)
## Warning: Removed 939 rows containing non-finite values (stat_contour).
Here I use a random sub-sample of the continuous forest data to check that differences between continuous forest and fragments are not purely due to differences in sample size, particularly differences in the complexity of the crossbasis smooth.
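The sub-sample model s_cf_sub is fitted elsewhere; the idea can be sketched roughly as follows (illustrative only, assuming the usual model.frame()/update() machinery, not the actual code used):

```r
# Illustrative sketch (not the code used to fit s_cf_sub): refit the
# continuous-forest model on a random sub-sample the size of the
# fragment dataset, so the two crossbasis smooths see comparable n.
set.seed(123) # seed value is arbitrary here
cf_data  <- model.frame(s_cf)
keep     <- sample(nrow(cf_data), nrow(model.frame(s_1ha)))
s_cf_sub <- update(s_cf, data = cf_data[keep, ])
```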
summary(s_1ha)$edf[3]
## [1] 4.966862
summary(s_cf)$edf[3]
## [1] 11.1015
summary(s_cf_sub)$edf[3]
## [1] 1.930495
draw(s_cf_sub)
## Warning: Removed 910 rows containing non-finite values (stat_contour).
The sub-sample’s crossbasis surface still differs from the fragment surface in the same way, though, so the difference does not appear to be an artifact of sample size.
appraise(g_1ha)
pdf(NULL)
gam.check(g_1ha)
##
## Method: fREML Optimizer: perf newton
## full convergence after 14 iterations.
## Gradient range [-2.066654e-10,7.391816e-08]
## (score 12196.75 & scale 1).
## Hessian positive definite, eigenvalue range [0.08811498,3.622954].
## Model rank = 85 / 85
##
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
##
## k' edf k-index p-value
## s(log_size_prev) 9.00 4.19 1 0.56
## s(plot) 4.00 2.70 NA NA
## s(spei_history,L) 70.00 18.39 NA NA
dev.off()
## quartz_off_screen
## 2
g_1ha_res <- residuals(g_1ha)
# looking for near zero edf
gam(g_1ha_res ~ s(log_size_prev, k = 10, bs = "cs"), gamma = 1.4,
    data = model.frame(g_1ha)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## g_1ha_res ~ s(log_size_prev, k = 10, bs = "cs")
##
## Estimated degrees of freedom:
## 1.84 total = 2.84
##
## GCV score: 1.331934
gam(g_1ha_res ~ s(spei_history, L,
                  bs = "cb",
                  k = c(3, 35),
                  xt = list(bs = "cr")),
    gamma = 1.4,
    select = TRUE,
    data = model.frame(g_1ha)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## g_1ha_res ~ s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Estimated degrees of freedom:
## 0.584 total = 1.58
##
## GCV score: 1.3363
summary(g_1ha)
##
## Family: Scaled t(4.727,0.477)
## Link function: identity
##
## Formula:
## log_size ~ flwr_prev + s(log_size_prev, bs = "cr") + s(plot,
## bs = "re") + s(spei_history, L, bs = "cb", k = c(3, 35),
## xt = list(bs = "cr"))
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.37803 0.10625 41.205 <2e-16 ***
## flwr_prev1 0.08474 0.03600 2.354 0.0186 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(log_size_prev) 4.186 9 2943.17 < 2e-16 ***
## s(plot) 2.705 3 20.38 < 2e-16 ***
## s(spei_history,L) 18.385 21 23.37 8.05e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.696 Deviance explained = 63.2%
## fREML = 12197 Scale est. = 1 n = 8527
draw(g_1ha)
## Warning: Removed 910 rows containing non-finite values (stat_contour).
gratia::appraise(g_cf)
pdf(NULL)
gam.check(g_cf)
##
## Method: fREML Optimizer: perf newton
## full convergence after 14 iterations.
## Gradient range [-3.633122e-10,9.547658e-07]
## (score 41017.73 & scale 1).
## Hessian positive definite, eigenvalue range [0.1892746,2.035307].
## Model rank = 87 / 87
##
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
##
## k' edf k-index p-value
## s(log_size_prev) 9.00 5.72 0.98 0.1
## s(plot) 6.00 4.08 NA NA
## s(spei_history,L) 70.00 13.64 NA NA
dev.off()
## quartz_off_screen
## 2
g_cf_res <- residuals(g_cf)
# looking for near zero edf
gam(g_cf_res ~ s(log_size_prev, k = 10, bs = "cs"), gamma = 1.4,
    data = model.frame(g_cf)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## g_cf_res ~ s(log_size_prev, k = 10, bs = "cs")
##
## Estimated degrees of freedom:
## 2.53 total = 3.53
##
## GCV score: 1.406485
gam(g_cf_res ~ s(spei_history, L,
                 bs = "cb",
                 k = c(3, 35),
                 xt = list(bs = "cr")),
    gamma = 1.4,
    select = TRUE,
    data = model.frame(g_cf)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## g_cf_res ~ s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Estimated degrees of freedom:
## 15 total = 15.97
##
## GCV score: 1.412272
summary(g_cf)
##
## Family: Scaled t(3.903,0.416)
## Link function: identity
##
## Formula:
## log_size ~ flwr_prev + s(log_size_prev, bs = "cr") + s(plot,
## bs = "re") + s(spei_history, L, bs = "cb", k = c(3, 35),
## xt = list(bs = "cr"))
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.35421 0.02597 167.665 <2e-16 ***
## flwr_prev1 0.02795 0.01482 1.886 0.0594 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(log_size_prev) 5.719 9 17697.579 <2e-16 ***
## s(plot) 4.077 5 6.703 <2e-16 ***
## s(spei_history,L) 13.645 19 101.657 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.784 Deviance explained = 69.1%
## fREML = 41018 Scale est. = 1 n = 28849
draw(g_cf)
## Warning: Removed 968 rows containing non-finite values (stat_contour).
Again I use a random sub-sample to check that differences are not purely due to sample size; in particular, whether the lower edf in continuous forest is an artifact of its larger sample size.
summary(g_1ha)$edf[3]
## [1] 18.38534
summary(g_cf)$edf[3]
## [1] 13.64452
summary(g_cf_sub)$edf[3]
## [1] 12.33543
The edf of the sub-sample is similar to that of the full dataset, despite the difference in sample size.
draw(g_cf_sub)
## Warning: Removed 910 rows containing non-finite values (stat_contour).
Crossbasis surface looks nearly identical.
f_1ha_res <- qresid(f_1ha)
par(mfrow = c(2,2))
# density plot
plot(density(f_1ha_res))
#QQ plot
qqnorm(f_1ha_res); qqline(f_1ha_res)
#Fitted vs. residuals--binned residuals plot
binnedplot(fitted(f_1ha), residuals(f_1ha, type = "response"))
pdf(NULL)
gam.check(f_1ha)
##
## Method: fREML Optimizer: perf newton
## full convergence after 18 iterations.
## Gradient range [-6.392746e-09,4.200497e-08]
## (score 10584.82 & scale 1).
## Hessian positive definite, eigenvalue range [0.3677846,6.412113].
## Model rank = 1351 / 1351
##
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
##
## k' edf k-index p-value
## s(log_size_prev) 9.00 3.32 0.99 0.99
## s(plot) 4.00 2.51 NA NA
## s(spei_history,L) 70.00 9.76 NA NA
## s(ha_id_number) 1266.00 67.21 NA NA
dev.off()
## quartz_off_screen
## 2
# looking for near zero edf
gam(f_1ha_res ~ s(log_size_prev, k = 10, bs = "cs"), gamma = 1.4,
    data = model.frame(f_1ha)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## f_1ha_res ~ s(log_size_prev, k = 10, bs = "cs")
##
## Estimated degrees of freedom:
## 0 total = 1
##
## GCV score: 0.986224
gam(f_1ha_res ~ s(spei_history, L,
                  bs = "cb",
                  k = c(3, 35),
                  xt = list(bs = "cr")),
    gamma = 1.4,
    select = TRUE,
    data = model.frame(f_1ha)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## f_1ha_res ~ s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Estimated degrees of freedom:
## 0.709 total = 1.71
##
## GCV score: 0.9861867
summary(f_1ha)
##
## Family: binomial
## Link function: logit
##
## Formula:
## flwr ~ flwr_prev + s(log_size_prev, bs = "cr") + s(plot, bs = "re") +
## s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr")) +
## s(ha_id_number, bs = "re")
##
## Parametric coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -5.9700 0.5369 -11.119 < 2e-16 ***
## flwr_prev1 0.7084 0.1739 4.075 4.6e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df Chi.sq p-value
## s(log_size_prev) 3.318 9 364.88 < 2e-16 ***
## s(plot) 2.505 3 20.39 0.00108 **
## s(spei_history,L) 9.758 21 88.71 < 2e-16 ***
## s(ha_id_number) 67.206 1265 85.76 0.00092 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.29 Deviance explained = 44.5%
## fREML = 10585 Scale est. = 1 n = 8527
draw(f_1ha)
## Warning: Removed 910 rows containing non-finite values (stat_contour).
f_cf_res <- qresid(f_cf)
par(mfrow = c(2,2))
# density plot
plot(density(f_cf_res))
#QQ plot
qqnorm(f_cf_res); qqline(f_cf_res)
#Fitted vs. residuals--binned residuals plot
binnedplot(fitted(f_cf), residuals(f_cf, type = "response"))
pdf(NULL)
gam.check(f_cf)
##
## Method: fREML Optimizer: perf newton
## full convergence after 10 iterations.
## Gradient range [-2.011692e-05,8.805875e-06]
## (score 36174.51 & scale 1).
## Hessian positive definite, eigenvalue range [2.007401e-05,48.30054].
## Model rank = 4477 / 4477
##
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
##
## k' edf k-index p-value
## s(log_size_prev) 9.0 5.7 1 1
## s(plot) 6.0 3.9 NA NA
## s(spei_history,L) 70.0 10.3 NA NA
## s(ha_id_number) 4390.0 351.7 NA NA
dev.off()
## quartz_off_screen
## 2
# looking for near zero edf
gam(f_cf_res ~ s(log_size_prev, k = 10, bs = "cs"), gamma = 1.4,
    data = model.frame(f_cf)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## f_cf_res ~ s(log_size_prev, k = 10, bs = "cs")
##
## Estimated degrees of freedom:
## 0 total = 1
##
## GCV score: 0.9989736
gam(f_cf_res ~ s(spei_history, L,
                 bs = "cb",
                 k = c(3, 35),
                 xt = list(bs = "cr")),
    gamma = 1.4,
    select = TRUE,
    data = model.frame(f_cf)) #fine
##
## Family: gaussian
## Link function: identity
##
## Formula:
## f_cf_res ~ s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr"))
##
## Estimated degrees of freedom:
## 1.4 total = 2.4
##
## GCV score: 0.9988722
summary(f_cf)
##
## Family: binomial
## Link function: logit
##
## Formula:
## flwr ~ flwr_prev + s(log_size_prev, bs = "cr") + s(plot, bs = "re") +
## s(spei_history, L, bs = "cb", k = c(3, 35), xt = list(bs = "cr")) +
## s(ha_id_number, bs = "re")
##
## Parametric coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -5.39245 0.24028 -22.442 < 2e-16 ***
## flwr_prev1 0.53988 0.08756 6.166 7.01e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df Chi.sq p-value
## s(log_size_prev) 5.698 9 2104.21 <2e-16 ***
## s(plot) 3.900 5 69.33 <2e-16 ***
## s(spei_history,L) 10.325 21 425.95 <2e-16 ***
## s(ha_id_number) 351.687 4389 494.24 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.267 Deviance explained = 40.6%
## fREML = 36175 Scale est. = 1 n = 28849
draw(f_cf)
## Warning: Removed 968 rows containing non-finite values (stat_contour).
Again I check that differences are not purely due to sample size. For flowering, edf is actually slightly higher in continuous forest despite its larger sample size; with the sub-sample, the edf values are more similar.
summary(f_1ha)$edf[3]
## [1] 9.758276
summary(f_cf)$edf[3]
## [1] 10.32503
summary(f_cf_sub)$edf[3]
## [1] 7.419691
draw(f_cf_sub)
## Warning: Removed 910 rows containing non-finite values (stat_contour).
Crossbasis surface looks extremely similar.
### Reproducibility receipt
## datetime
Sys.time()
## [1] "2021-08-12 16:39:45 EDT"
## repository
if (requireNamespace("git2r", quietly = TRUE)) {
  git2r::repository()
} else {
  c(
    system2("git", args = c("log", "--name-status", "-1"), stdout = TRUE),
    system2("git", args = c("remote", "-v"), stdout = TRUE)
  )
}
## Local: revisions /Users/scottericr/Documents/HeliconiaDemography
## Remote: revisions @ origin (https://github.com/BrunaLab/HeliconiaDemography.git)
## Head: [5bf30c2] 2021-08-12: improve (technical) explanation of models. Closes #87
## session info
sessionInfo()
## R version 4.0.2 (2020-06-22)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS 10.16
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats4 parallel stats graphics grDevices datasets utils
## [8] methods base
##
## other attached packages:
## [1] arm_1.11-2 lme4_1.1-27.1 Matrix_1.3-3 MASS_7.3-54
## [5] qqplotr_0.0.5 Hmisc_4.5-0 Formula_1.2-4 survival_3.2-11
## [9] lattice_0.20-44 readxl_1.3.1 colorspace_2.0-1 rmarkdown_2.7
## [13] statmod_1.4.36 latex2exp_0.5.0 gratia_0.6.0.9112 broom_0.7.6
## [17] patchwork_1.1.1 glue_1.4.2 bbmle_1.0.23.1 dlnm_2.4.5
## [21] mgcv_1.8-36 nlme_3.1-152 lubridate_1.7.10 janitor_2.1.0
## [25] tsModel_0.6 SPEI_1.7 lmomco_2.3.6 tsibble_1.0.1
## [29] forcats_0.5.1 stringr_1.4.0 dplyr_1.0.5 purrr_0.3.4
## [33] readr_1.4.0 tidyr_1.1.3 tibble_3.1.1 ggplot2_3.3.5
## [37] tidyverse_1.3.1 here_1.0.1 tarchetypes_0.2.0 targets_0.4.2
## [41] dotenv_1.0.3 conflicted_1.0.4
##
## loaded via a namespace (and not attached):
## [1] backports_1.2.1 igraph_1.2.6 splines_4.0.2
## [4] digest_0.6.27 htmltools_0.5.1.1 fansi_0.4.2
## [7] magrittr_2.0.1 checkmate_2.0.0 memoise_2.0.0
## [10] cluster_2.1.2 modelr_0.1.8 bdsmatrix_1.3-4
## [13] anytime_0.3.9 jpeg_0.1-8.1 rvest_1.0.0
## [16] haven_2.4.1 xfun_0.22 callr_3.7.0
## [19] crayon_1.4.1 jsonlite_1.7.2 gtable_0.3.0
## [22] DEoptimR_1.0-8 abind_1.4-5 scales_1.1.1
## [25] mvtnorm_1.1-1 DBI_1.1.1 Rcpp_1.0.6
## [28] isoband_0.2.4 htmlTable_2.1.0 foreign_0.8-81
## [31] htmlwidgets_1.5.3 httr_1.4.2 RColorBrewer_1.1-2
## [34] ellipsis_0.3.2 pkgconfig_2.0.3 farver_2.1.0
## [37] nnet_7.3-16 sass_0.3.1 dbplyr_2.1.1
## [40] utf8_1.2.1 tidyselect_1.1.1 labeling_0.4.2
## [43] rlang_0.4.11 munsell_0.5.0 cellranger_1.1.0
## [46] tools_4.0.2 cachem_1.0.4 cli_2.5.0
## [49] generics_0.1.0 evaluate_0.14 fastmap_1.1.0
## [52] yaml_2.2.1 goftest_1.2-2 processx_3.5.2
## [55] knitr_1.33 fs_1.5.0 robustbase_0.93-7
## [58] mvnfast_0.2.5.1 xml2_1.3.2 compiler_4.0.2
## [61] rstudioapi_0.13 png_0.1-7 reprex_2.0.0
## [64] bslib_0.2.4 stringi_1.6.2 highr_0.9
## [67] ps_1.6.0 nloptr_1.2.2.2 vctrs_0.3.8
## [70] pillar_1.6.0 lifecycle_1.0.0 jquerylib_0.1.4
## [73] data.table_1.14.0 R6_2.5.0 latticeExtra_0.6-29
## [76] renv_0.13.2 gridExtra_2.3 Lmoments_1.3-1
## [79] codetools_0.2-18 boot_1.3-28 assertthat_0.2.1
## [82] rprojroot_2.0.2 withr_2.4.2 hms_1.1.0
## [85] grid_4.0.2 rpart_4.1-15 coda_0.19-4
## [88] minqa_1.2.4 snakecase_0.11.0 git2r_0.28.0
## [91] numDeriv_2016.8-1.1 base64enc_0.1-3